The Loss of Orthogonality in the Gram-Schmidt Orthogonalization Process
Authors
Abstract
Keywords: Numerical linear algebra, QR factorization, Gram-Schmidt orthogonalization, Reorthogonalization, Rounding error analysis.

The work of the third author was supported by the project 1ET400300415 within the National Program of Research "Information Society" and by the Institutional Research Plan AVOZ10300504 "Computer Science for the Information Society: Models, Algorithms, Applications".

1. INTRODUCTION

Scientific computing and mathematical models in engineering are becoming increasingly dependent upon the development and implementation of efficient parallel algorithms on modern high-performance computers. Numerical methods, and in particular algorithms of numerical linear algebra, represent the most widely used computational tools in this area. Matrix computations such as the solution of systems of linear equations, least squares problems, singular value decomposition, and algebraic eigenvalue problems govern the performance of many applications on vector and parallel computers. In almost all of them, one frequently meets as a fundamental subproblem the orthogonal basis problem, i.e., the problem of constructing or computing an orthogonal basis of some linear subspace or of a space generated by the column vectors of an associated rectangular matrix.

In this paper, we consider the Gram-Schmidt orthogonalization process, the most widely known and used representative of a broad class of orthogonalization techniques and strategies (for a deep survey, we refer to [1-3]). In particular, we consider its classical and modified variants together with their variants with reorthogonalization. We examine the level of orthogonality among the computed basis vectors produced by these schemes in connection with some characteristics of the initial column matrix A, such as its dimensions or condition number κ(A). Then, we use these results in the context of the Arnoldi process for constructing an orthogonal basis of a sequence of associated Krylov subspaces. The presented results lead to important conclusions about the parallel implementation and efficiency of computational variants of the Gram-Schmidt algorithm.

The organization of the paper is as follows. Section 2 briefly recalls the Gram-Schmidt algorithm for a rectangular matrix A and gives an overview of basic results on the orthogonality of computed vectors developed for its different variants. In particular, we focus on recent roundoff analysis of the Gram-Schmidt algorithm with reorthogonalization. In Section 3, we consider the Arnoldi process based on four different orthogonalization schemes, namely, the classical and modified Gram-Schmidt orthogonalizations and their variants with reorthogonalization. Theoretical results are illustrated by numerical experiments on a real-world problem from the Harwell-Boeing collection.

Throughout this paper, ||X|| denotes the 2-norm (spectral norm) of a matrix X, σ_min(X) stands for its minimal singular value, and ||x|| denotes the Euclidean norm of a vector x. The condition number of X is denoted by κ(X) and is defined as κ(X) = ||X|| / σ_min(X).
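This notation is easy to reproduce numerically. The following short sketch (a minimal illustration, not part of the paper; the helper names and the random test matrix are our own choices) computes the condition number κ(X) = ||X||/σ_min(X) and the loss-of-orthogonality measure ||I - Q^T Q|| with NumPy.

import numpy as np

def kappa(X):
    # Condition number kappa(X) = ||X|| / sigma_min(X) in the 2-norm,
    # computed from the singular values (returned in descending order).
    s = np.linalg.svd(X, compute_uv=False)
    return s[0] / s[-1]

def loss_of_orthogonality(Q):
    # ||I - Q^T Q|| in the 2-norm; zero for an exactly orthonormal Q.
    n = Q.shape[1]
    return np.linalg.norm(np.eye(n) - Q.T @ Q, 2)

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 50))   # an illustrative tall matrix
Q, _ = np.linalg.qr(A)                # Householder-based QR as a reference
print("kappa(A)      =", kappa(A))
print("||I - Q^T Q|| =", loss_of_orthogonality(Q))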
2. LOSS OF ORTHOGONALITY IN QR FACTORIZATION

Let A = (a_1, ..., a_n) be a real m × n matrix (m ≥ n) with full column rank (rank(A) = n). The Gram-Schmidt (GS) orthogonalization process [4] produces an orthogonal basis Q = (q_1, ..., q_n) of span(A) such that A = QR, where R is an upper triangular matrix of order n. The orthogonal matrix Q is constructed successively column by column so that for each j = 1, ..., n, we have Q_j = (q_1, ..., q_j) and span(q_1, ..., q_j) = span(a_1, ..., a_j).

For the purpose of QR factorization of a matrix, many orthogonalization algorithms and techniques have been proposed and are widely used, including those based on Householder transformations or Givens rotations (see, e.g., [1-3]). Several computational variants of the Gram-Schmidt process have also been proposed and analyzed. Considerably less attention, however, has been paid to their numerical stability. Indeed, their numerical behavior can differ significantly, sometimes leading to a severe loss of orthogonality or even to the loss of linear independence of the computed vectors. One of the first methods for successive orthogonalization is the classical Gram-Schmidt algorithm (CGS) [1]. It has been confirmed by many numerical experiments that this technique may produce a set of vectors which is far from orthogonal, and sometimes the orthogonality can be lost completely [5]. Nevertheless, despite its weakness, this technique is frequently considered and implemented, probably due to its simplicity and potential parallelism, which will be discussed later. A brief sketch of the classical Gram-Schmidt algorithm can be found in Table 1.

A simple change in the loop of the CGS scheme leads to the modified Gram-Schmidt algorithm (MGS), which has better numerical properties that are also much better understood (see the MGS algorithm in Table 1 and/or [1,3]). Indeed, Björck [5] and later Björck and Paige [6] have shown that at iteration step j = 1, ..., n the loss of orthogonality of the vectors Q̄_j computed in the MGS process can be bounded in terms of the machine precision and the condition number κ(A_j) of the matrix A_j = (a_1, ..., a_j).

Table 1. The classical and modified Gram-Schmidt algorithms.

Classical GS Algorithm:
  for j = 1, ..., n
    a_j^(1) = a_j
    for k = 1, ..., j-1
      a_j^(1) = a_j^(1) - (a_j, q_k) q_k
    end
    q_j = a_j^(1) / ||a_j^(1)||
  end

Modified GS Algorithm:
  for j = 1, ..., n
    a_j^(1) = a_j
    for k = 1, ..., j-1
      a_j^(k+1) = a_j^(k) - (a_j^(k), q_k) q_k
    end
    q_j = a_j^(j) / ||a_j^(j)||
  end
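To make the difference between the two columns of Table 1 concrete, here is a hedged NumPy sketch of both loops together with the loss-of-orthogonality measure; the function names and the synthetic ill-conditioned test matrix are illustrative choices, not the experiments reported in the paper.

import numpy as np

def cgs(A):
    # Classical Gram-Schmidt: the projection coefficients use the ORIGINAL
    # column a_j, as in the left column of Table 1.
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for k in range(j):
            R[k, j] = Q[:, k] @ A[:, j]
            v -= R[k, j] * Q[:, k]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

def mgs(A):
    # Modified Gram-Schmidt: the "simple change in the loop" -- project
    # the CURRENT, already updated vector instead of the original column.
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for k in range(j):
            R[k, j] = Q[:, k] @ v
            v -= R[k, j] * Q[:, k]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

def loss(Q):
    # Loss of orthogonality ||I - Q^T Q|| in the 2-norm.
    return np.linalg.norm(np.eye(Q.shape[1]) - Q.T @ Q, 2)

# A synthetic 200 x 50 matrix with prescribed singular values, kappa(A) ~ 1e10.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((200, 50)))
V, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = U @ np.diag(np.logspace(0, -10, 50)) @ V.T
for name, alg in (("CGS", cgs), ("MGS", mgs)):
    Q, R = alg(A)
    print(name, "loss of orthogonality:", loss(Q),
          " factorization error:", np.linalg.norm(A - Q @ R, 2))

On a matrix like this one typically observes a loss of orthogonality on the order of u·κ(A) for MGS, in line with the bounds of Björck [5] and Björck and Paige [6], whereas CGS can be dramatically worse (on the order of u·κ²(A)); both variants still deliver a small factorization error ||A - QR||. Reorthogonalization (running the projection loop a second time) restores orthogonality to roughly the machine-precision level at about twice the cost.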
Similar articles
ON THE CONTINUITY OF PROJECTIONS AND A GENERALIZED GRAM-SCHMIDT PROCESS
Let Ω be an open connected subset of the complex plane C and let T be a bounded linear operator on a Hilbert space H. For λ in Ω, let E(λ) be the orthogonal projection onto the null-space of T − λI. We discuss the necessary and sufficient conditions for the map λ ↦ E(λ) to be continuous on Ω. A generalized Gram-Schmidt process is also given.
Enhanced Gram-Schmidt Process for Improving the Stability in Signal and Image Processing
The Gram-Schmidt Process (GSP) is used to convert a non-orthogonal basis (a set of linearly independent vectors) into an orthonormal basis (a set of orthogonal, unit-length vectors). The process consists of taking each vector and then subtracting the elements in common with the previous vectors. This paper introduces an Enhanced version of the Gram-Schmidt Process (EGSP) with inverse, which is ...
Influence of Water Cooling on Orthogonal Cutting Process of Ti-6Al-4V Using Smooth-Particle Hydrodynamics Method
Temperature control during the cutting process, through parameters such as the cutting velocity and the application of water cooling, is essential to decrease the cutting force, increase the life of the cutting tool, and decrease the machined-surface temperature of the workpiece. In this research, the temperature of the machined surface and the chip-tool interface in the orthogonal cutting process of Ti-6Al-4V were...
New version of Gram-Schmidt Process with inverse for Signal and Image Processing
The Gram-Schmidt Process (GSP) is used to convert a non-orthogonal basis (a set of linearly independent vectors, matrices, etc.) into an orthonormal basis (a set of orthogonal, unit-length vectors or two- or three-dimensional matrices). The process consists of taking each array and then subtracting the projections in common with the previous arrays. This paper introduces an enhanced version of the Gra...
Orthogonality preserving mappings on inner product C*-modules
Suppose that A is a C^*-algebra. We consider the class of A-linear mappings between two inner product A-modules such that for any two orthogonal vectors in the domain space their values are orthogonal in the target space. In this paper, we intend to determine A-linear mappings that preserve orthogonality. For this purpose, suppose that E and F are two inner product A-modules and A+ is the set o...